456 research outputs found

    THE INTEGRATED PERFORMANCE EVALUATION SYSTEM IN HEALTHCARE ORGANIZATIONS. THE CASE OF THE A.S.L. n.5 "SPEZZINO"

    Chapter I introduces the concept of "Performance", analyzed in its various forms with particular reference to performance evaluation and its specific features in the public sector. Chapter II analyzes performance and its evaluation with a focus on the healthcare domain and its related peculiarities and problems. Chapter III describes the regulatory framework for performance evaluation in the healthcare sector, with particular attention to Legislative Decree no. 150/2009 (D.Lgs n. 150/2009). Chapter IV introduces the case study, analyzing the context in which the A.S.L. under examination operates. Chapter V analyzes the design and implementation process of the integrated performance evaluation system of the A.S.L. 5 "Spezzino", paying particular attention to its link with the corporate planning process

    Learn to See by Events: Color Frame Synthesis from Event and RGB Cameras

    Event cameras are biologically-inspired sensors that capture the temporal evolution of the scene: they sense pixel-wise brightness variations and output a corresponding stream of asynchronous events. Despite having multiple advantages over traditional cameras, their adoption is partially hindered by the limited applicability of traditional data processing and vision algorithms. To this end, we present a framework that exploits the output stream of event cameras to synthesize RGB frames, relying on an initial or periodic set of color key-frames and the sequence of intermediate events. Unlike existing work, we propose a deep learning-based frame synthesis method consisting of an adversarial architecture combined with a recurrent module. Qualitative results and quantitative per-pixel, perceptual, and semantic evaluations on four public datasets confirm the quality of the synthesized images
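
    The abstract above describes an adversarial architecture combined with a recurrent module. As an illustration only, the following is a minimal PyTorch sketch of a recurrent generator that fuses a color key-frame with a voxelized event tensor to predict the next RGB frame; it is an assumption about the general idea, not the authors' architecture, and names such as EventFrameGenerator, num_event_bins, and hidden_channels are hypothetical. The discriminator that would supply the adversarial loss during training is omitted.

        # Minimal sketch (assumption, not the authors' code) of a recurrent frame generator
        import torch
        import torch.nn as nn

        class EventFrameGenerator(nn.Module):
            def __init__(self, num_event_bins=5, hidden_channels=32):
                super().__init__()
                # Encoder for the concatenated key-frame (3 ch) and event voxel grid (num_event_bins ch)
                self.encoder = nn.Sequential(
                    nn.Conv2d(3 + num_event_bins, hidden_channels, 3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(hidden_channels, hidden_channels, 3, padding=1),
                    nn.ReLU(inplace=True),
                )
                # Simple convolutional recurrent gate that carries temporal context across events
                self.update_gate = nn.Conv2d(2 * hidden_channels, hidden_channels, 3, padding=1)
                self.candidate = nn.Conv2d(2 * hidden_channels, hidden_channels, 3, padding=1)
                # Decoder maps the hidden state back to an RGB frame
                self.decoder = nn.Conv2d(hidden_channels, 3, 3, padding=1)

            def forward(self, key_frame, event_voxels, hidden=None):
                x = self.encoder(torch.cat([key_frame, event_voxels], dim=1))
                if hidden is None:
                    hidden = torch.zeros_like(x)
                z = torch.sigmoid(self.update_gate(torch.cat([x, hidden], dim=1)))
                h_tilde = torch.tanh(self.candidate(torch.cat([x, hidden], dim=1)))
                hidden = (1 - z) * hidden + z * h_tilde
                rgb = torch.sigmoid(self.decoder(hidden))
                return rgb, hidden

        # Usage: one color key-frame followed by a sequence of event tensors
        gen = EventFrameGenerator()
        key = torch.rand(1, 3, 128, 128)
        hidden = None
        for _ in range(4):
            events = torch.rand(1, 5, 128, 128)
            frame, hidden = gen(key, events, hidden)

    The hidden state accumulates information from the event sequence, so each synthesized frame is conditioned on both the key-frame and the events observed since it.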

    Study, automation and planning of micromachining processes based on infrared pulsed Fiber Laser

    Short-pulsed Fiber Lasers represent an ideal solution for many micromachining operations due to their high-quality laser beam and strong focusability. In this thesis, micromachining processes based on an infrared pulsed Fiber Laser were investigated. A laser micromachining setup based on a 10 W Yb-doped pulsed Fiber Laser source was designed, integrated, and automated in order to carry out the experimental activity. A new approach to part programming for laser micromachining, based on syntax-free, unstructured natural language text, was proposed. Experimental work was conducted with the pulsed Fiber Laser micromachining setup on metal and non-metal surfaces. The experiments proved that Fiber Lasers are well suited to these micromachining tasks

    Automatic Image Cropping and Selection using Saliency: an Application to Historical Manuscripts

    Automatic image cropping techniques are particularly important for improving the visual quality of cropped images and can be applied to a wide range of applications such as photo editing, image compression, and thumbnail selection. In this paper, we propose a saliency-based image cropping method that produces meaningful cropped images by relying only on the corresponding saliency maps. Experiments on standard image cropping datasets demonstrate the benefit of the proposed solution with respect to other cropping methods. Moreover, we present an image selection method that can be effectively applied to automatically select the most representative pages of historical manuscripts, thus improving the navigation of historical digital libraries
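
    As a rough illustration of saliency-driven cropping (an assumption about the general idea, not the paper's method), the sketch below derives a crop rectangle from a saliency map by thresholding it at a quantile and taking the bounding box of the retained pixels; saliency_crop and keep_ratio are hypothetical names.

        # Minimal sketch (assumption, not the paper's method) of cropping from a saliency map
        import numpy as np

        def saliency_crop(saliency, keep_ratio=0.6):
            """Return (x0, y0, x1, y1) enclosing the most salient pixels of a 2D map."""
            threshold = np.quantile(saliency, 1.0 - keep_ratio)   # keep roughly the top keep_ratio of pixels
            ys, xs = np.where(saliency >= threshold)
            if ys.size == 0:                                      # degenerate map: keep the full image
                h, w = saliency.shape
                return 0, 0, w, h
            return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

        saliency_map = np.random.rand(240, 320)                   # placeholder saliency map
        x0, y0, x1, y1 = saliency_crop(saliency_map)
        # cropped = image[y0:y1, x0:x1] would then be the proposed crop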

    Evaluation of DLC, WC/C, and TiN Coatings on Martensitic Stainless Steel and Yttria-Stabilized Tetragonal Zirconia Polycrystal Substrates for Reusable Surgical Scalpels

    DLC, WC/C, and TiN coated SF 100 martensitic stainless steel and Yttria-Stabilized Tetragonal Zirconia Polycrystal (Y-TZP) surgical scalpels were tested, characterized, and comparatively evaluated with regard to chemical leach, micromorphology, and mechanical properties in order to assess their suitability as reusable surgical scalpels. The Vickers microhardness (HV), Scratch Hardness Number, and sharpening-by-grinding and cutting capabilities of all the coated scalpels were deemed appropriate for reusable surgical scalpels. However, coated Y-TZP scalpels demonstrated higher Vickers microhardness than martensitic stainless steel scalpels with the same coatings; the exception was the DLC coating on the Y-TZP substrate, which showed lower adhesion than the other coatings. Uncoated and coated martensitic stainless steel scalpels exhibited corrosion and chemical leach when soaked for a defined period of time in a simulated physiological saline solution, while uncoated and coated Y-TZP scalpels did not present these drawbacks. Therefore, despite their higher microhardness and expected longer-lasting cutting capability, DLC, WC/C, and TiN coated SF 100 martensitic stainless steel surgical scalpels are unsuitable as reusable surgical scalpels and, like the uncoated ones, are limited to disposable use only. Based on these experimental results, WC/C and TiN coated Y-TZP scalpels can be proposed as candidates for reusable surgical scalpel applications

    Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps

    Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications, such as the detection of unsafe situations or the study of mutual interactions for statistical and social purposes. In this paper, we propose a non-invasive and light-invariant framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera. The method can be applied to any robot without requiring hardware access to its internal state. We introduce a novel representation of the predicted pose, namely Semi-Perspective Decoupled Heatmaps (SPDH), to accurately compute 3D joint locations in world coordinates, adapting efficient deep networks designed for 2D Human Pose Estimation. The proposed approach, which takes as input a depth representation based on XYZ coordinates, can be trained on synthetic depth data and applied to real-world settings without the need for domain adaptation techniques. To this end, we present the SimBa dataset, based on both synthetic and real depth images, and use it for the experimental evaluation. Results show that the proposed approach, consisting of a specific depth map representation and the SPDH, surpasses the current state of the art
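
    To make the decoupled-heatmap idea more concrete, the sketch below decodes, for each joint, one heatmap over the image plane (u, v) and one over a plane of u versus quantized depth, then back-projects the peaks to 3D camera coordinates with pinhole intrinsics. This is an assumption inspired by the description above, not the released SPDH code; decode_spdh and the depth range z_min/z_max are hypothetical.

        # Minimal sketch (assumption, not the authors' code) of decoding decoupled heatmaps to 3D joints
        import numpy as np

        def decode_spdh(uv_heatmaps, uz_heatmaps, fx, fy, cx, cy, z_min=0.0, z_max=3.0):
            """uv_heatmaps: (num_joints, H, W); uz_heatmaps: (num_joints, D, W) with D depth bins."""
            num_joints = uv_heatmaps.shape[0]
            joints_3d = np.zeros((num_joints, 3))
            for j in range(num_joints):
                # Peak of the image-plane heatmap gives the pixel location (u, v)
                v, u = np.unravel_index(np.argmax(uv_heatmaps[j]), uv_heatmaps[j].shape)
                # Peak along the depth axis of the second heatmap gives a quantized depth bin
                d_bin, _ = np.unravel_index(np.argmax(uz_heatmaps[j]), uz_heatmaps[j].shape)
                z = z_min + (z_max - z_min) * d_bin / (uz_heatmaps[j].shape[0] - 1)
                # Pinhole back-projection from (u, v, z) to camera-space XYZ
                joints_3d[j] = [(u - cx) * z / fx, (v - cy) * z / fy, z]
            return joints_3d

        # Example with random maps: 6 joints, 240x320 image plane, 96 depth bins
        joints = decode_spdh(np.random.rand(6, 240, 320), np.random.rand(6, 96, 320),
                             fx=570.0, fy=570.0, cx=160.0, cy=120.0)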

    Multi-Category Mesh Reconstruction From Image Collections

    Recently, learning frameworks have shown the capability of inferring the accurate shape, pose, and texture of an object from a single RGB image. However, current methods are trained on image collections of a single category in order to exploit category-specific priors, and they often rely on category-specific 3D templates. In this paper, we present an alternative approach that infers the textured mesh of objects by combining a series of deformable 3D models with a set of instance-specific deformations, poses, and textures. Differently from previous works, our method is trained on images of multiple object categories using only foreground masks and rough camera poses as supervision. Without specific 3D templates, the framework learns category-level models which are deformed to recover the 3D shape of the depicted object. The instance-specific deformations are predicted independently for each vertex of the learned 3D mesh, enabling the dynamic subdivision of the mesh during the training process. Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner. Predicted shapes are smooth and can benefit from multiple steps of subdivision during the training process, obtaining comparable or state-of-the-art results on two public datasets. Models and code are publicly released
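
    The per-vertex deformation of a learned category-level template can be sketched as follows; this is an assumption about the general mechanism, not the released code, and MeshHead, num_vertices, and feature_dim are hypothetical names. A learnable template mesh is selected by the predicted category and displaced by offsets regressed from an image embedding.

        # Minimal sketch (assumption, not the released code) of category templates plus per-vertex offsets
        import torch
        import torch.nn as nn

        class MeshHead(nn.Module):
            def __init__(self, num_categories=3, num_vertices=642, feature_dim=256):
                super().__init__()
                # One learnable category-level template mesh (num_vertices x 3) per category
                self.templates = nn.Parameter(torch.randn(num_categories, num_vertices, 3) * 0.01)
                # Regress a per-vertex offset from the image embedding
                self.deform = nn.Linear(feature_dim, num_vertices * 3)

            def forward(self, image_feature, category_id):
                batch = image_feature.shape[0]
                template = self.templates[category_id]                    # (B, V, 3) selected templates
                offsets = self.deform(image_feature).view(batch, -1, 3)   # (B, V, 3) instance deformations
                return template + offsets                                 # deformed instance meshes

        head = MeshHead()
        features = torch.rand(2, 256)          # image embeddings from an encoder backbone
        categories = torch.tensor([0, 2])      # predicted category indices
        vertices = head(features, categories)  # (2, 642, 3) vertex positions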

    Mercury: a vision-based framework for Driver Monitoring

    In this paper, we propose a complete framework, namely Mercury, that combines Computer Vision and Deep Learning algorithms to continuously monitor the driver during the driving activity. The proposed solution complies with the requirements imposed by the challenging automotive context. First, light invariance is needed so that the system works regardless of the time of day and the weather conditions; therefore, infrared-based images, i.e. depth maps (in which each pixel corresponds to the distance between the sensor and that point in the scene), have been exploited in conjunction with traditional intensity images. Second, the system must be non-invasive, since the driver's movements must not be impeded during the driving activity: in this context, the use of cameras and vision-based algorithms is one of the best solutions. Finally, real-time performance is needed, since a monitoring system must react immediately as soon as a situation of potential danger is detected